This paper presents a driver-specific risk recognition framework for autonomous vehicles that can extract inter-vehicle interactions. The extraction is performed for urban driving scenarios in a driver-cognition-aware manner, so as to improve the recognition accuracy of risky scenes. First, cluster analysis is applied to each driver's operation data to learn the driver's subjective assessment of risky scenes, and a corresponding risk label is generated for each scene. Second, a graph representation model (GRM) is adopted to unify and construct the features of real driving scenes, covering dynamic vehicles, inter-vehicle interactions, and static traffic markings. The driver-specific risk labels provide the ground truth needed to capture different drivers' risk evaluation criteria, while the graph model represents multiple features of the driving scenes. The proposed framework can therefore learn each driver's risk-evaluation patterns over driving scenes and establish driver-specific risk identifiers. Finally, the performance of the framework is evaluated through experiments on a real-world urban driving dataset collected from multiple drivers. The results show that the proposed framework can accurately identify risks and their levels in real driving environments.
translated by Google Translate
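The first step described above, clustering driver operation data into per-scene risk labels, can be sketched in a few lines. The following Python snippet is our own minimal illustration, not the paper's method: the feature layout, the number of risk levels, and the assumption that the first feature (e.g., peak deceleration) correlates with risk are all hypothetical.

```python
import numpy as np

def _init_centers(X, k):
    # Deterministic farthest-point initialization for k-means.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(
            np.linalg.norm(X[:, None] - np.asarray(centers)[None], axis=-1),
            axis=1,
        )
        centers.append(X[int(d.argmax())])
    return np.asarray(centers, dtype=float)

def risk_labels(features, n_levels=3, iters=50):
    """Plain k-means over per-scene operation features; clusters are then
    re-indexed by ascending mean of the first feature (assumed here to be a
    risk-correlated signal such as peak deceleration), so label 0 is the
    lowest-risk level for this driver."""
    centers = _init_centers(features, n_levels)
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        assign = d.argmin(axis=1)
        for k in range(n_levels):
            if np.any(assign == k):
                centers[k] = features[assign == k].mean(axis=0)
    order = np.argsort(centers[:, 0])        # low risk -> high risk
    remap = np.empty(n_levels, dtype=int)
    remap[order] = np.arange(n_levels)
    return remap[assign]
```

Running the same clustering per driver is what makes the resulting labels driver-specific: two drivers with different braking habits will produce different cluster boundaries for the same scenes.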
Text-to-image generation aims to generate realistic images consistent with a given text. Prior works mainly adopt multi-stage architectures built by stacking generator-discriminator pairs for multiple rounds of adversarial training, in which the text semantics used to guide generation remain static across all stages. This work argues that the text features at each stage should instead be adaptively re-composed conditioned on the status of the historical stages (i.e., the text and image features of earlier stages) to provide diversified and accurate semantic guidance during the coarse-to-fine generation process. We therefore propose a novel Dynamical Semantic Evolution GAN (DSE-GAN) to re-compose the text features at each stage under a novel single adversarial multi-stage architecture. Specifically, we design (1) a Dynamical Semantic Evolution (DSE) module, which first aggregates historical image features to summarize the generative feedback, then dynamically selects the words to be re-composed at each stage and, by dynamically re-composing them, enhances or suppresses semantics in subspaces of different granularity; and (2) a Single Adversarial Multi-stage Architecture (SAMA), which extends the previous structure by eliminating the need for complex multiple adversarial training, thereby allowing more stages of text-image interaction and ultimately facilitating the DSE module. We conduct comprehensive experiments and show that DSE-GAN achieves 7.48% and 37.8% relative FID improvements on two widely used benchmarks, i.e., CUB-200 and MSCOCO, respectively.
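The core DSE idea, re-weighting word features conditioned on feedback from earlier stages, can be sketched as follows. This is a hedged simplification in Python, not the paper's formulation: the mean-pooled image query, the scaled dot-product scoring, and the rescaling are all our assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def recompose_text(words, hist_img):
    """words: (T, d) word features; hist_img: (N, d) image features from
    earlier stages. Returns (T, d) re-composed word features in which words
    aligned with the generative feedback are enhanced and others suppressed."""
    query = hist_img.mean(axis=0)                    # summarize generative feedback
    scores = words @ query / np.sqrt(words.shape[1])
    gates = softmax(scores)                          # dynamically select words
    return words * gates[:, None] * words.shape[0]   # rescale to keep magnitude
```

Calling this once per generation stage, with `hist_img` growing as stages accumulate, is what makes the semantic guidance evolve rather than stay static.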
Click-through rate (CTR) prediction is a fundamental technique in recommendation and advertising systems. Recent studies have shown that learning a unified model to serve multiple domains effectively improves overall performance. However, it remains challenging to improve generalization across domains under limited training data, and current solutions are hard to deploy due to their computational complexity. In this paper, we propose AdaSparse, a simple yet effective framework for multi-domain CTR prediction, which learns an adaptively sparse structure for each domain and thereby achieves better cross-domain generalization at lower computational cost. In AdaSparse, we introduce domain-aware neuron-level weighting factors to measure the importance of neurons; with them, our model can prune redundant neurons for each domain to improve generalization. We further add flexible sparsity regularizations to control the sparsity ratio of the learned structures. Offline and online experiments show that AdaSparse outperforms previous multi-domain CTR models.
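A minimal sketch of the pruning mechanism described above, written as we read it; the parameterization of the weighting factors and the hard threshold are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def domain_forward(x, W, b, domain_factors, domain_id, eps=0.05):
    """x: (d_in,) input; W: (d_in, d_hidden); b: (d_hidden,);
    domain_factors: (n_domains, d_hidden) learned per-domain neuron weights.
    Neurons whose factor magnitude falls below eps are pruned (set to zero),
    yielding a different sparse sub-network per domain."""
    f = domain_factors[domain_id].copy()
    f[np.abs(f) < eps] = 0.0                 # prune redundant neurons
    h = np.maximum(x @ W + b, 0.0)           # ReLU hidden layer
    return h * f, f != 0.0                   # scaled activations + kept-neuron mask
```

At serving time only the kept neurons need to be computed for a given domain, which is where the lower computational cost comes from.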
Transfer learning improves the performance of a target task by leveraging the data of a specific source task: the closer the relationship between the source and target tasks, the greater the performance gain from transfer. In neuroscience, the relationship between cognitive tasks is usually represented by the similarity of activated brain regions or neural representations. However, no study has linked transfer learning with neuroscience to reveal the relationships between cognitive tasks. In this study, we propose a transfer learning framework to reflect these relationships, and compare the task relations it reveals with those derived from brain regions (e.g., via Neurosynth). Our transfer learning results produce a taxonomy of cognitive tasks that reflects the relationships between them, and it is highly consistent with the task relations derived from Neurosynth: transfer learning performs better at task decoding when the source and target cognitive tasks activate similar brain regions. Our study reveals the relationships among multiple cognitive tasks and provides guidance for source-task selection in transfer learning for neural decoding from small-sample data.
Self-supervised learning has shown very promising results for monocular depth estimation. Both scene structure and local details are important clues for high-quality depth estimation. Recent works suffer from a lack of explicit modeling of scene structure and of proper handling of detail information, which leads to performance bottlenecks and blurry artifacts in the predictions. In this paper, we propose the channel-wise attention-based depth estimation network (CADepth-Net) with two effective contributions: 1) a structure perception module that employs a self-attention mechanism to capture long-range dependencies and aggregate discriminative features along the channel dimension, explicitly enhancing the perception of scene structure and yielding better scene understanding and richer feature representations; 2) a detail emphasis module that re-calibrates channel-wise feature maps and selectively emphasizes informative features, aiming to highlight crucial local detail information and fuse features at different levels more efficiently, resulting in more precise and sharper depth predictions. Extensive experiments validate the effectiveness of our method and show that our model achieves state-of-the-art results on the KITTI benchmark and the Make3D dataset.
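The channel-dimension self-attention in the structure perception module can be illustrated roughly as below. This Python sketch is our own simplification under assumed shapes, not the released implementation: channel-to-channel affinities are computed from the flattened spatial maps, softmax-normalized, and used to aggregate features across channels with a residual connection.

```python
import numpy as np

def channel_attention(feat):
    """feat: (C, H, W) feature map -> (C, H, W) channel-recalibrated map."""
    C, H, W = feat.shape
    flat = feat.reshape(C, -1)                       # (C, HW)
    energy = flat @ flat.T / np.sqrt(H * W)          # channel-to-channel affinity
    e = np.exp(energy - energy.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)          # softmax over channels
    out = (attn @ flat).reshape(C, H, W)             # aggregate across channels
    return feat + out                                # residual connection
```

Because every output channel mixes information from all channels, long-range structural cues that any single channel captures can propagate to the rest.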
One challenge in pattern recognition is open set recognition. Compared with closed set recognition, open set recognition needs to reduce not only the empirical risk but also the open space risk, and reducing these two risks corresponds to correctly classifying known classes and identifying unknown classes, respectively. How to reduce the open space risk is the key to open set recognition. This paper explores the origin of the open space risk by analyzing the feature distributions of known and unknown classes. On this basis, a spatial-location-constraint prototype loss function is proposed to reduce both risks simultaneously. Extensive experiments on multiple benchmark datasets, together with many visualization results, show that our method outperforms most existing approaches.
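A hedged sketch of a prototype loss with a location constraint, in the spirit of the abstract above but with our own formulation: features are pulled toward their class prototype (reducing empirical risk), while prototypes are pulled toward the origin so known classes stay in a bounded region and far-from-origin space can be treated as unknown (reducing open space risk). The squared-distance terms and the `lam` trade-off are assumptions.

```python
import numpy as np

def prototype_loss(features, labels, prototypes, lam=0.1):
    """features: (N, d); labels: (N,) integer class ids; prototypes: (K, d).
    Returns the scalar loss: pull-to-prototype + location constraint."""
    pull = np.mean(np.sum((features - prototypes[labels]) ** 2, axis=1))
    location = lam * np.mean(np.sum(prototypes ** 2, axis=1))  # constrain positions
    return pull + location
```

At test time, a sample whose distance to every prototype exceeds a threshold would then be rejected as unknown.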
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
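The NAIVEATTACK variant, stamping a trigger onto raw data before distillation, is simple enough to sketch. The following Python snippet is an illustrative assumption of such a trigger injection (patch size, position, and poison rate are ours, not the paper's settings):

```python
import numpy as np

def poison(images, labels, target, rate=0.1, patch=3, value=1.0, seed=0):
    """images: (N, H, W) in [0, 1]; labels: (N,) integer classes.
    Stamps a patch-sized trigger in the bottom-right corner of a random
    `rate` fraction of the images and flips their labels to `target`.
    Returns poisoned copies plus the poisoned indices."""
    imgs, labs = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(imgs), max(1, int(rate * len(imgs))), replace=False)
    imgs[idx, -patch:, -patch:] = value      # bottom-right trigger patch
    labs[idx] = target                       # flip label to the attack target
    return imgs, labs, idx
```

The distinction the abstract draws is *where* this runs: here the poisoned set is the input to the distillation procedure, so the trigger is compressed into the synthetic data itself, whereas DOORPING would additionally re-optimize the trigger at every distillation iteration.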
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, with little precedent in the literature, even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 against those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold. First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After these steps, performance on novel classes improves significantly over our strong baseline. Additionally, our framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
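The first insight above, feature-level enhancement via mask-derived class centers, can be sketched as masked average pooling followed by channel-wise gating. This Python snippet is a hedged illustration under assumed shapes; the sigmoid gating and the single-shot setting are our simplifications, not the released RefT code.

```python
import numpy as np

def reweight_query(support_feat, support_mask, query_feat):
    """support_feat: (C, H, W) support features; support_mask: (H, W) binary
    foreground mask; query_feat: (C, H, W). Masked average pooling over the
    support yields a dynamic class center, whose per-channel gate re-weights
    the query features."""
    area = support_mask.sum() + 1e-6
    center = (support_feat * support_mask).sum(axis=(1, 2)) / area  # (C,) class center
    gate = 1.0 / (1.0 + np.exp(-center))                            # sigmoid per channel
    return query_feat * gate[:, None, None]
```

Averaging only inside the mask is what makes the center class-specific: background clutter in the support image never contributes to the gate.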